
    CES-513 Stages for Developing Control Systems using EMG and EEG Signals: A survey

    Bio-signals such as EMG (electromyography), EEG (electroencephalography), EOG (electrooculography), and ECG (electrocardiography) have recently been deployed to develop control systems that improve the quality of life of disabled and elderly people. This technical report reviews the current deployment of these state-of-the-art control systems and discusses some of the challenges involved. In particular, the stages for developing EMG- and EEG-based control systems are categorized as data acquisition, data segmentation, feature extraction, classification, and the controller. Some related bio-control applications are outlined, and a brief conclusion is presented.

    Head movement and facial expression-based human-machine interface for controlling an intelligent wheelchair

    This paper presents a human-machine interface (HMI) for hands-free control of an electric-powered wheelchair (EPW) based on head movements and facial expressions, detected with the gyroscope and the 'Cognitiv suite' of an Emotiv EPOC device, respectively. The proposed HMI provides two control modes: 1) control mode 1 uses four head movements to display in its graphical user interface the control command the user wants to execute, and one facial expression to confirm its execution; 2) control mode 2 employs two facial expressions for turning and forward motion, and one head movement for stopping the wheelchair. Both control modes therefore offer hands-free control of the wheelchair. Two subjects used the two control modes to operate a wheelchair in an indoor environment. Five facial expressions were tested to determine whether users can employ different facial expressions to execute the commands. The experimental results show that the proposed HMI is reliable for operating the wheelchair safely.

    Glossokinetic potential based tongue-machine interface for 1-D extraction

    The tongue is a versatile organ located in the oral cavity that can move in complex ways with very little fatigue. Many assistive technologies operated by the tongue, called tongue-human computer interfaces or tongue-machine interfaces (TMIs), have been studied for paralyzed individuals. However, many of them are obtrusive systems whose hardware, such as sensors and a magnetic tracer, is placed in the mouth and on the tongue. Such approaches can be annoying, aesthetically unappealing, and unhygienic. In this study, we aimed to develop a natural and reliable tongue-machine interface using solely glossokinetic potentials (GKPs), investigating how well machine learning algorithms support 1-D tongue-based control or communication for assistive technologies. Glossokinetic potential responses are generated by touching the buccal walls with the tip of the tongue. Eight male and two female naive healthy subjects, aged 22-34 years, participated. Linear discriminant analysis, support vector machines, and the k-nearest neighbor algorithm were used as machine learning classifiers. The greatest success rate, an accuracy of 99% for the best participant, was achieved with the support vector machine. This study may help disabled people control assistive devices in a natural, unobtrusive, speedy, and reliable manner. Moreover, GKP-based TMIs are expected to be an alternative control and communication channel to traditional electroencephalography (EEG)-based brain-computer interfaces, which have significant inadequacies arising from the EEG signals.